Distributed Averaging Via Lifted Markov Chains
Similar Resources
Fast and Slim Lifted Markov Chains
The Metropolis-Hastings method allows one to design a reversible Markov chain P on a given graph G for a target stationary distribution π. Such a Markov chain may suffer from slow mixing precisely because of its reversibility. Diaconis, Holmes and Neal (1997), for a ring-like chain P, and later Chen, Lovász and Pak (2002), for an arbitrary chain P, provided an explicit construction of a non-reversible Markov...
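For concreteness, here is a minimal Python sketch of the Metropolis-Hastings construction this abstract starts from, assuming a random-walk proposal on G; the function and variable names are illustrative, not from the paper.

```python
import random

def metropolis_hastings_step(x, neighbors, pi):
    """One Metropolis-Hastings step with a random-walk proposal on a graph.

    x         -- current vertex
    neighbors -- dict: vertex -> list of adjacent vertices
    pi        -- dict: vertex -> unnormalized target probability
    """
    y = random.choice(neighbors[x])
    # Proposal q(x, y) = 1/deg(x); accept with the standard MH ratio
    # min(1, pi(y) q(y, x) / (pi(x) q(x, y))).
    accept = min(1.0, (pi[y] * len(neighbors[x])) / (pi[x] * len(neighbors[y])))
    return y if random.random() < accept else x

# Example: a 5-cycle with a non-uniform target; the resulting chain is
# reversible with stationary distribution proportional to pi.
nbrs = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
pi = {i: float(i + 1) for i in range(5)}
x = 0
for _ in range(10_000):
    x = metropolis_hastings_step(x, nbrs, pi)
```

The acceptance ratio includes the degree correction so the sketch stays correct on non-regular graphs; on a regular graph it reduces to min(1, π(y)/π(x)).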
Distributed Markov Chains
The formal verification of large probabilistic models is challenging. Exploiting the concurrency that is often present is one way to address this problem. Here we study a class of communicating probabilistic agents in which the synchronizations determine the probability distribution for the next moves of the participating agents. The key property of this class is that the synchronizations are d...
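A hedged toy sketch of the kind of model described: the joint next moves of the agents participating in a synchronization are drawn from a distribution attached to that synchronization. The action names, states, and probabilities below are illustrative assumptions, not the paper's formalism.

```python
import random

# Hypothetical table: (action, current joint state) -> list of
# (next joint state, probability). Only the synchronizing agents'
# states appear, reflecting that the synchronization alone fixes
# the distribution over their next moves.
sync_table = {
    ("handshake", ("idle", "idle")): [(("busy", "busy"), 0.7),
                                      (("idle", "idle"), 0.3)],
    ("release", ("busy", "busy")):   [(("idle", "idle"), 1.0)],
}

def synchronize(action, state_a, state_b):
    """Sample the joint next local states of two synchronizing agents."""
    outcomes = sync_table[(action, (state_a, state_b))]
    moves, probs = zip(*outcomes)
    return random.choices(moves, weights=probs, k=1)[0]

# Example: one synchronization step.
a, b = synchronize("handshake", "idle", "idle")
```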
Probabilistic XML via Markov Chains
We show how Recursive Markov Chains (RMCs) and their restrictions can define probabilistic distributions over XML documents, and study tractability of querying over such models. We show that RMCs subsume several existing probabilistic XML models. In contrast to the latter, RMC models (i) capture probabilistic versions of XML schema languages such as DTDs, (ii) can be exponentially more succinct...
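As a rough illustration of how a recursive stochastic process can define a probability distribution over XML documents, here is a toy sketch; the element name, grammar, and probabilities are invented and far simpler than the RMC models the abstract describes.

```python
import random

def random_item(p_child=0.4, max_depth=5, depth=0):
    """Recursively generate a random <item> XML fragment.

    Each loop iteration adds another child with probability p_child
    (a geometric number of children); max_depth guarantees termination.
    """
    children = []
    while depth < max_depth and random.random() < p_child:
        children.append(random_item(p_child, max_depth, depth + 1))
    if not children:
        return "<item/>"
    return "<item>" + "".join(children) + "</item>"

print(random_item())
```

Running the generator repeatedly samples documents from the induced distribution; the recursion is what lets such a model assign probabilities to documents of unbounded nesting depth.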
An Interruptible Algorithm for Perfect Sampling via Markov Chains
For a large class of examples arising in statistical physics known as attractive spin systems (e.g., the Ising model), one seeks to sample from a probability distribution π on an enormously large state space, but elementary sampling is ruled out by the infeasibility of calculating an appropriate normalizing constant. The same difficulty arises in computer science problems where one seeks to sam...
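The cited paper concerns Fill's interruptible algorithm; as a hedged illustration of the perfect-sampling idea it builds on, here is a sketch of the closely related (but distinct) Propp-Wilson coupling-from-the-past scheme for a monotone chain. The state space and update rule are invented for the example.

```python
import random

def cftp_monotone(n, update):
    """Coupling from the past for a monotone chain on states 0..n-1.

    update(state, u) must preserve the order of states for every draw u.
    Returns an exact sample from the chain's stationary distribution.
    """
    draws = []  # draws[k] = the randomness used at time -(k+1)
    T = 1
    while True:
        while len(draws) < T:
            draws.append(random.random())
        lo, hi = 0, n - 1
        # Run the bottom and top trajectories from time -T to time 0,
        # reusing the SAME draw at each time step across restarts.
        for u in reversed(draws[:T]):
            lo, hi = update(lo, u), update(hi, u)
        if lo == hi:      # every trajectory has coalesced
            return lo
        T *= 2            # not coalesced yet: restart further in the past

# Monotone lazy walk on {0, ..., 5} with reflecting boundaries.
def walk(state, u, n=6):
    if u < 1 / 3:
        return max(state - 1, 0)
    if u > 2 / 3:
        return min(state + 1, n - 1)
    return state

sample = cftp_monotone(6, walk)
```

Note how this sidesteps the normalizing constant entirely: only the ability to simulate the update rule is needed, not the ability to evaluate π.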
Randomized Distributed Algorithms as Markov Chains
Distributed randomized algorithms, when they operate under a memoryless scheduler, behave as finite Markov chains: the probability at the n-th step of going from a configuration x to another configuration y is a constant p that depends only on x and y. By Markov theory, we thus know that, no matter where the algorithm starts, the probability that the algorithm is, after n steps, in a “recurrent” configuration t...
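This observation can be made concrete with a small sketch: once the constant transition probabilities p(x, y) between configurations are tabulated, standard matrix computations give the n-step and long-run (stationary) behavior. The two-configuration matrix below is invented for illustration.

```python
import numpy as np

# Hypothetical example: under a memoryless scheduler the algorithm moves
# between two configurations with constant probabilities, so its behavior
# is a finite Markov chain. P[i][j] = probability of going i -> j.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Distribution over configurations after n = 50 steps, starting from 0.
mu0 = np.array([1.0, 0.0])
mu_n = mu0 @ np.linalg.matrix_power(P, 50)

# The long-run limit is the stationary distribution pi with pi P = pi,
# i.e. the left eigenvector of P for eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
```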
Journal
Journal title: IEEE Transactions on Information Theory
Year: 2010
ISSN: 0018-9448, 1557-9654
DOI: 10.1109/tit.2009.2034777